18 October 2024
On October 18, the United Nations Office for Disarmament Affairs (UNODA), in partnership with the Stockholm International Peace Research Institute (SIPRI) and the European Union, hosted a UN General Assembly First Committee side event focused on responsible innovation in Artificial Intelligence (AI) for peace and security. The event, titled “Responsible AI for Peace and Security: Meeting the Moment in Tackling the Risks Presented by Misuse of Civilian AI,” convened more than 60 diverse stakeholders from academia, civil society, industry, and international organizations.
Opening Remarks:
Ms. Marketa Homolkova, Head of the Political Section for Disarmament and Non-Proliferation, EU Delegation in Geneva, and Ms. Radha Day, Chief of UNODA’s Regional Disarmament, Information, and Outreach Branch, opened the event. Ms. Homolkova discussed the EU’s support for disarmament projects, noting the need to adapt arms control efforts to emerging technologies and the growing threat of AI misuse in areas like chemical and biological weapons. Ms. Day emphasized that AI is a “hot area” in disarmament discussions, pointing to the need for multilateral engagement and expertise to navigate the challenges AI poses for international peace and security.
Main Discussion:
The panel explored three core questions posed by the moderator, Mr. Charles Ovink, Political Affairs Officer at UNODA: the impact of civilian AI on international peace and security, current governance efforts, and ways to strengthen connections across governance structures.
Mr. Vincent Boulanin, Director of SIPRI’s AI Governance Program, outlined three risk categories: accidental risks from AI design failures, political risks from AI misuse, and physical risks, including those arising from AI’s integration into military robotics. He highlighted the EU’s Artificial Intelligence Act and the California AI Transparency Act as examples of significant recent regulatory advancements. Mr. Boulanin also cited the UN Secretary-General’s High-level Advisory Body on AI as a critical development for overseeing the impact of civilian AI applications, including on international peace and security.
Coming from the private sector, Mr. Chris Meserole, Executive Director of the Frontier Model Forum, discussed lessons learned from earlier technological advancements and stressed that companies are prepared to engage in responsible AI practices. He highlighted recent US government collaboration with major AI firms to promote principles of safety, security, and trust in AI development. This commitment, according to Mr. Meserole, illustrates the industry’s acknowledgment of the need for regulatory limits to ensure safe AI advancement.
Ms. Julia Stoyanovich, Associate Professor at New York University’s Tandon School of Engineering & Center for Data Science, addressed the challenges of automated AI safety. She argued for the importance of a cross-sector conversation to establish shared values in AI development. Stoyanovich noted that the responsibility for AI education lies with both developers and policymakers and that a balanced dialogue is essential to ensure safety and ethical considerations in AI systems.
Ms. Kerstin Vignard, Senior Analyst at the Johns Hopkins University Applied Physics Laboratory, critiqued the notion that “innovation” is inherently fragile or inherently beneficial. She urged that AI development be guided by robust regulatory standards and that innovation align with a common set of ethical values, particularly in military applications. Ms. Vignard noted that many researchers are unaware of policymakers’ interest in cooperation and underscored the need for policymakers to communicate AI’s challenges effectively.
Key Themes and Outcomes:
In response to questions on improving stakeholder collaboration, panelists agreed on the need for regulatory oversight and a multi-directional approach to AI literacy. Ms. Stoyanovich underscored the importance of education and a forward-looking perspective on AI impacts, while Mr. Meserole emphasized that regulatory literacy should be mutually understood by both the AI community and end users. Ms. Vignard stressed the importance of attracting technical expertise to support policy efforts and communicate AI’s complexities on an international level.
The event concluded with a call for ongoing dialogue and cooperation among stakeholders to ensure that AI innovation supports peace and security, reinforcing the UN’s commitment to responsible AI governance.
This side event was part of the Promoting Responsible Innovation in Artificial Intelligence for Peace and Security project, made possible by the generous funding of the European Union. For more information, please contact Mr. Charles Ovink, Political Affairs Officer, at charles.ovink@un.org.